The Curious Case of the Bizarre, Disappearing Captcha

WIRED

While puzzling captchas--from dogs in hats to sliding jockstraps--still exist, most bot-deterring challenges have vanished into the background. As I browse the web in 2025, I rarely encounter captchas anymore. There's no slanted text to discern. No image grid of stoplights to identify. And on the rare occasion that I am asked to complete some bot-deterring task, the experience almost always feels surreal.


Explainable AI for Securing Healthcare in IoT-Integrated 6G Wireless Networks

Kaur, Navneet, Gupta, Lav

arXiv.org Artificial Intelligence

As healthcare systems increasingly rely on advanced wireless networks and connected devices, ensuring the security of medical applications has become a critical concern. The integration of Internet of Medical Things (IoMT) devices with real-time health monitoring and care delivery has revolutionized patient care but has also introduced new security vulnerabilities. Each connected device, whether it is part of a robotic surgical arm, intensive care equipment, or a wearable health monitor, serves as a potential entry point for cyberattacks. Such vulnerabilities could lead to life-threatening consequences, such as poorly performed surgeries, malfunctioning life-support systems, or incorrect treatment due to data breaches. The ITU IMT-2030 framework envisions that 6G will transform healthcare through massive connectivity, AI, and cloud integration. However, it may also introduce new security vulnerabilities that threaten patient safety and privacy. Addressing these threats therefore requires a thorough reassessment of security measures. This paper presents an innovative use of explainable AI (XAI) techniques, such as SHAP, LIME, and DiCE, to identify vulnerabilities, strengthen security measures, and enhance both security and transparency within the 6G healthcare ecosystem, ensuring robust protection and trust. In addition to the theoretical background, the paper presents an experimental analysis and the authors' findings.
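The XAI techniques the abstract names (SHAP, LIME, DiCE) all attribute a model's decision to its input features. As a minimal, hypothetical sketch of that idea, not the paper's method, the snippet below uses scikit-learn's model-agnostic permutation importance on synthetic "network traffic" features for a toy intrusion detector; the feature names and data are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Hypothetical network-traffic features for a toy intrusion detector.
feature_names = ["packet_rate", "payload_entropy", "conn_duration", "port_variance"]
X = rng.normal(size=(1000, 4))
# Construct labels so the "attack" signal lives mostly in packet_rate
# and payload_entropy; the other two features are noise.
y = ((X[:, 0] + 0.8 * X[:, 1] + 0.1 * rng.normal(size=1000)) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: the drop in test accuracy when one feature is
# shuffled -- a model-agnostic attribution in the same spirit as SHAP/LIME.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

In a security setting, an attribution like this lets an analyst check whether the detector relies on features an attacker can cheaply manipulate; dedicated libraries such as `shap` and `lime` produce richer per-prediction explanations than this global importance score.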


Deep Learning Under Siege: Identifying Security Vulnerabilities and Risk Mitigation Strategies

Al-Karaki, Jamal, Khan, Muhammad Al-Zafar, Mohamad, Mostafa, Chowdhury, Dababrata

arXiv.org Artificial Intelligence

With the wholesale adoption of Deep Learning (DL) models in nearly all aspects of society, a unique set of challenges has emerged. Centered primarily on the architectures of these models, these risks pose a significant challenge, and addressing them is key to the successful deployment and use of DL in the future. In this research, we present the security challenges associated with current DL models deployed in production, and anticipate the challenges of future DL technologies based on advances in computing, AI, and hardware. In addition, we propose risk mitigation techniques to address these challenges and provide metrics to evaluate their effectiveness.


SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest

Haryanto, Christoforus Yoga, Vu, Minh Hieu, Nguyen, Trung Duc, Lomempow, Emily, Nurliana, Yulia, Taheri, Sona

arXiv.org Artificial Intelligence

The rapid advancement of Generative AI (GenAI) technologies offers transformative opportunities within Australia's critical technologies of national interest while introducing unique security challenges. This paper presents SecGenAI, a comprehensive security framework for cloud-based GenAI applications, with a focus on Retrieval-Augmented Generation (RAG) systems. SecGenAI addresses functional, infrastructure, and governance requirements, integrating end-to-end security analysis to generate specifications emphasizing data privacy, secure deployment, and shared responsibility models. Aligned with Australian Privacy Principles, AI Ethics Principles, and guidelines from the Australian Cyber Security Centre and Digital Transformation Agency, SecGenAI mitigates threats such as data leakage, adversarial attacks, and model inversion. The framework's novel approach combines advanced machine learning techniques with robust security measures, ensuring compliance with Australian regulations while enhancing the reliability and trustworthiness of GenAI systems. This research contributes to the field of intelligent systems by providing actionable strategies for secure GenAI implementation in industry, fostering innovation in AI applications, and safeguarding national interests.


AI poses national security threat, warns terror watchdog

The Guardian

The creators of artificial intelligence need to abandon their "tech utopian" mindset, according to the terror watchdog, amid fears that the new technology could be used to groom vulnerable individuals. Jonathan Hall KC, whose role is to review the adequacy of terrorism legislation, said the national security threat from AI was becoming ever more apparent and the technology needed to be designed with the intentions of terrorists firmly in mind. He said too much AI development focused on the potential positives of the technology while neglecting to consider how terrorists might use it to carry out attacks. "They need to have some horrible little 15-year-old neo-Nazi in the room with them, working out what they might do. You've got to hardwire the defences against what you know people will do with it," said Hall.


New Research Points to Hidden Vulnerabilities Within Machine Learning Systems

#artificialintelligence

Government agencies collect a lot of data, and have access to even more of it in their archives. The trick has always been trying to tap into that store of information to improve decision-making, which is a major focus in government these days. The President's Management Agenda, for example, emphasizes the importance of data-driven decision-making to improve federal services. The volume of data that most agencies are working with is such that humans can't easily tap into it for help with that decision-making. And even if they can perform searches into that data, the process is slow.


Adopting MLSecOps for secure machine learning at scale

#artificialintelligence

Given the complexity, sensitivity and scale of the typical enterprise's software stack, security has naturally always been a central concern for most IT teams. But in addition to the well-known security challenges faced by devops teams, organizations also need to consider a new source of security challenges: machine learning (ML). ML adoption is skyrocketing in every sector, with McKinsey finding that by the end of last year, 56% of businesses had adopted ML in at least one business function. However, in the race to adoption, many are encountering the distinct security challenges that come with ML, along with challenges in deploying and leveraging ML responsibly.


Doing good by fighting fraud: Ethical anti-fraud systems for mobile payments

Din, Zainul Abi, Venugopalan, Hari, Lin, Henry, Wushensky, Adam, Liu, Steven, King, Samuel T.

arXiv.org Artificial Intelligence

App builders commonly use security challenges, a form of step-up authentication, to add security to their apps. However, the ethical implications of this type of architecture have not been studied previously. In this paper, we present a large-scale measurement study of running an existing anti-fraud security challenge, Boxer, in real apps on mobile devices. We find that although Boxer works well overall, it is unable to scan effectively on devices that run its machine learning models at less than one frame per second (FPS), blocking users with inexpensive devices. With the insights from our study, we design Daredevil, a new anti-fraud system for scanning payment cards that works well across the broad range of performance characteristics and hardware configurations found on modern mobile devices. Daredevil reduces the number of devices that run at less than one FPS by an order of magnitude compared to Boxer, providing a more equitable system for fighting fraud. In total, we collect data from 5,085,444 real devices across 496 real apps running production software and interacting with real users.


4 ways AI can help us enter a new age of cybersecurity

#artificialintelligence

Global catastrophes have historically brought moments of truth for all fields of business. In such times, their inner workings, strengths and weaknesses are laid bare for the whole world to see, as organizations rapidly alter their processes to come to terms with the new reality. Businesses that can make bold moves during such challenging times can quickly turn misfortune into a benefit. Early indications are that businesses that value information as a currency, and have been quick to adopt machine learning and advanced data analytics, have emerged better from the economic aftermath of the pandemic. The coronavirus pandemic that continues to ravage the world has forced small businesses into building online ventures.


New challenges for AI, data privacy and the 5G hackathon

#artificialintelligence

There is a trade-off between technology innovation and security. The adoption of emerging technologies like 5G will fuel the proliferation of Internet of Things (IoT) which are often built with basic security controls, creating a larger attack surface. At the same time, reliance on data means that data breaches can cause greater damage.